Second-Order Online Nonconvex Optimization

Authors

Abstract

We present the online Newton's method, a single-step second-order method for online nonconvex optimization. We analyze its performance and obtain a dynamic regret bound that is linear in the cumulative variation between round optima. We show that if the variation between round optima is limited, this leads to a constant regret bound. In the general case, the online Newton's method outperforms online convex optimization algorithms for convex functions and performs similarly to a specialized algorithm for strongly convex functions. We simulate the performance of the online Newton's method on a nonlinear, moving-target localization example and find that it outperforms a first-order approach.
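A minimal Python sketch of the kind of single-step online Newton update described above, assuming the gradient and Hessian of each round's loss are available; the toy moving-target loss, the damping term, and all constants are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def loss_grad_hess(x, target):
    """Toy nonlinear tracking loss 0.5*||x - target||^2 + 0.25*sum((x - target)^4)."""
    r = x - target
    grad = r * (1.0 + r ** 2)            # elementwise gradient r + r^3
    hess = np.diag(1.0 + 3.0 * r ** 2)   # corresponding (diagonal) Hessian
    return grad, hess

rng = np.random.default_rng(0)
x = np.zeros(2)                          # online decision
target = np.array([1.0, -1.0])           # slowly moving optimum to be tracked
for t in range(50):
    target = target + 0.05 * rng.standard_normal(2)
    g, H = loss_grad_hess(x, target)
    # one (damped) Newton step per round; the small ridge guards against a singular Hessian
    x = x - np.linalg.solve(H + 1e-8 * np.eye(2), g)
```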


Similar Articles

Second Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems

A disadvantage of the SDP (semidefinite programming) relaxation method for quadratic and/or combinatorial optimization problems lies in its expensive computational cost. This paper proposes a SOCP (second-order-cone programming) relaxation method, which strengthens the lift-and-project LP (linear programming) relaxation method by adding convex quadratic valid inequalities for the positive semid...
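As a rough illustration of the idea, the following sketch builds an SOCP-style relaxation of a box-constrained nonconvex QP, assuming cvxpy is available; the particular family of convex quadratic valid inequalities (x^T C x <= <C, X> for a few PSD matrices C) and the bounds on the lifted variable are illustrative choices, not the construction from the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # indefinite objective matrix
c = rng.standard_normal(n)

x = cp.Variable(n)
X = cp.Variable((n, n), symmetric=True)              # lifted variable standing in for x x^T

# Convex quadratic valid inequalities x^T C x <= <C, X>, one per PSD matrix C.
cuts = [np.eye(n)] + [np.outer(e, e) for e in np.eye(n)]
constraints = [cp.quad_form(x, C) <= cp.trace(C @ X) for C in cuts]
# Box constraints on x and the corresponding valid bounds on the lifted entries.
constraints += [x >= -1, x <= 1, X <= 1, X >= -1]

prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), constraints)
prob.solve()
print("SOCP relaxation lower bound:", prob.value)
```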


Second-Order Kernel Online Convex Optimization with Adaptive Sketching

Kernel online convex optimization (KOCO) is a framework combining the expressiveness of nonparametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only O(t) time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal O(√T) regret. Nonetheless, many ...
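For context, a first-order KOCO update of the kind mentioned (functional gradient descent) can be sketched as below, assuming an RBF kernel and squared loss; the data stream, step size, and kernel width are illustrative. Each round touches every stored support point, which is the O(t) per-iteration cost the abstract refers to.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

supports, coeffs = [], []            # model f(x) = sum_i coeffs[i] * k(supports[i], x)
eta = 0.1                            # step size

rng = np.random.default_rng(0)
for t in range(200):
    x_t = rng.standard_normal(2)                                  # next point in the stream
    y_t = np.sin(x_t[0])                                          # toy target
    f_t = sum(a * rbf(s, x_t) for a, s in zip(coeffs, supports))  # O(t) kernel evaluations
    grad = f_t - y_t                                              # d/df of the loss 0.5*(f - y)^2
    # functional gradient step: add one support point with coefficient -eta * grad
    supports.append(x_t)
    coeffs.append(-eta * grad)
```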


A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds

In order to be provably convergent towards a second-order stationary point, optimization methods applied to nonconvex problems must necessarily exploit both first- and second-order information. However, as revealed by recent complexity analyses of some of these methods, the overall effort to reach second-order points is significantly larger than that of approaching first-order one...
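A generic (and heavily simplified) sketch of how first- and second-order information are combined to reach approximate second-order stationarity: take a gradient step while the gradient is large, otherwise step along a direction of sufficiently negative curvature. The thresholds and toy objective are illustrative; this is not the decoupled scheme proposed in the paper.

```python
import numpy as np

def grad(x): return 4 * x ** 3 - 2 * x        # f(x) = sum(x_i^4 - x_i^2), a nonconvex toy
def hess(x): return np.diag(12 * x ** 2 - 2)

x = np.zeros(3)                               # a strict saddle point of the toy objective
eps_g, eps_h, alpha = 1e-3, 1e-3, 0.1
for _ in range(500):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) > eps_g:
        x = x - alpha * g                     # first-order step while the gradient is large
        continue
    lam, V = np.linalg.eigh(H)
    if lam[0] < -eps_h:                       # second-order information: smallest eigenvalue
        d = V[:, 0]
        if d @ g > 0:
            d = -d                            # orient the direction so it is non-ascent
        x = x + alpha * d                     # negative-curvature step escapes the saddle
    else:
        break                                 # approximately second-order stationary
```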


Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solu...
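A generic gradient primal-dual iteration for min f(x) subject to Ax = b can be sketched as follows, assuming a toy nonconvex objective; the step sizes and penalty parameter are illustrative and do not reproduce the exact GPDA/GADMM updates analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5)); b = rng.standard_normal(2)   # linear constraints Ax = b
rho, alpha = 1.0, 0.02                                        # penalty and primal step size

def grad_f(x):
    return 4 * x ** 3 - 2 * x             # gradient of the nonconvex toy f(x) = sum(x^4 - x^2)

x = rng.standard_normal(5)                # random primal initialization
lam = np.zeros(2)                         # dual variable
for _ in range(5000):
    # primal: one gradient step on the augmented Lagrangian
    x = x - alpha * (grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b))
    # dual: ascent step on the constraint residual
    lam = lam + rho * (A @ x - b)

print("constraint violation:", np.linalg.norm(A @ x - b))
```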


Complexity Analysis of Second-order Line-search Algorithms for Smooth Nonconvex Optimization∗

There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposa...



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2021

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2020.3040372